Strip reasoning parts from message history (OpenAI fix) #61
Force-pushed from 9990c1a to 1d093b6.
The previous fix added `reasoning.encrypted_content` to the `include` option, but the root cause was that reasoning parts from history were being sent back to OpenAI's Responses API. When reasoning parts are included in messages sent to OpenAI, the SDK creates separate reasoning items with IDs (e.g., `rs_*`). These orphaned reasoning items cause errors: `Item 'rs_*' of type 'reasoning' was provided without its required following item.`

**Solution:** Strip reasoning parts from CmuxMessages *before* converting to ModelMessages. Reasoning content is only for display/debugging and should never be sent back to the API in subsequent turns. The stripping happens in `filterEmptyAssistantMessages()`, which runs before `convertToModelMessages()`, ensuring reasoning parts never reach the API.

Per Anthropic's documentation, reasoning content SHOULD be sent back to Anthropic models (via the `sendReasoning` option, which defaults to true). OpenAI's Responses API, however, uses encrypted reasoning items (IDs like `rs_*`) that are managed automatically via `previous_response_id`. Anthropic-style text-based reasoning parts sent to OpenAI create orphaned reasoning items that cause "reasoning without following item" errors.

Changes:
- Reverted `filterEmptyAssistantMessages()` to only filter reasoning-only messages
- Added a new `stripReasoningForOpenAI()` function for OpenAI-specific stripping
- Apply reasoning stripping only for the OpenAI provider in `aiService.ts`
- Added detailed comments explaining the provider-specific differences
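A minimal sketch of what that stripping step could look like. The shapes here are assumptions for illustration: `CmuxMessage` is cmux's own message type, simplified to just the fields this transform touches.

```typescript
// Hypothetical, simplified shapes; the real CmuxMessage type lives in cmux.
type CmuxPart = { type: string; [key: string]: unknown };
type CmuxMessage = { id: string; role: "user" | "assistant" | "system"; parts: CmuxPart[] };

// Drop reasoning parts before convertToModelMessages() runs, so they can
// never be turned into orphaned OpenAI `rs_*` reasoning items.
export function stripReasoningForOpenAI(messages: CmuxMessage[]): CmuxMessage[] {
  return messages
    .map((message) =>
      message.role === "assistant"
        ? { ...message, parts: message.parts.filter((p) => p.type !== "reasoning") }
        : message
    )
    // An assistant message that contained only reasoning becomes empty;
    // drop it rather than sending an empty assistant turn.
    .filter((message) => message.role !== "assistant" || message.parts.length > 0);
}
```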
**ammario** added a commit that referenced this pull request on Oct 7, 2025:
## Problem

Users are experiencing intermittent OpenAI API errors when using reasoning models with tool calls:
- `Item 'rs_*' of type 'reasoning' was provided without its required following item`
- `referenced reasoning on a function_call was not provided`

The previous fix (PR #61) stripped reasoning parts entirely, but this caused new errors and was too aggressive.

## Root Cause

OpenAI's Responses API uses encrypted reasoning items (IDs like `rs_*`) that are managed automatically via `previous_response_id`. When provider metadata from stored history is sent back to OpenAI, it references reasoning items that no longer exist in the current context, causing API errors.

## Solution

Instead of stripping reasoning content, we now **blank out provider metadata** on all content parts for OpenAI:
- Clear `providerMetadata` on text and reasoning parts
- Clear `callProviderMetadata` on tool-call parts

This preserves the reasoning content (which is useful for debugging and context) while preventing stale metadata references from causing errors.

## Changes

1. **New function**: `clearProviderMetadataForOpenAI()` - operates on `ModelMessage[]`
2. **Fixed**: `splitMixedContentMessages()` now treats reasoning parts as text parts (they stay together)
3. **Updated**: Tests to reflect that reasoning parts are preserved, not stripped

## References

- Vercel AI SDK Issue: vercel/ai#7099
- User solution: https://github.com/gvkhna/vibescraper/blob/f476c768266385affec3b5972790ef7b111da366/packages/website/src/assistant-ai/assistant-prepare-context.ts#L104

## Testing

- ✅ All message transform tests passing
- ✅ Reasoning parts preserved in both OpenAI and Anthropic flows
- ✅ Tool calls work correctly with reasoning
- ✅ Formatting checks pass
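A minimal sketch of the metadata-clearing approach described above. The field names follow this PR description (`providerMetadata` on text/reasoning parts, `callProviderMetadata` on tool-call parts); the loose part types are assumptions standing in for the AI SDK's `ModelMessage` content types, which may differ by SDK version.

```typescript
// Hypothetical, loosely-typed stand-ins for the SDK's message part types.
type MessagePart = {
  type: string;
  providerMetadata?: unknown;
  callProviderMetadata?: unknown;
  [key: string]: unknown;
};
type Message = { role: string; content: string | MessagePart[] };

// Keep all content (including reasoning text) but blank out provider
// metadata so stale OpenAI item references can't reach the API.
export function clearProviderMetadataForOpenAI<T extends Message>(messages: T[]): T[] {
  return messages.map((message) => {
    if (!Array.isArray(message.content)) return message;
    return {
      ...message,
      content: message.content.map((part) => {
        if (part.type === "text" || part.type === "reasoning") {
          return { ...part, providerMetadata: undefined };
        }
        if (part.type === "tool-call") {
          return { ...part, callProviderMetadata: undefined };
        }
        return part;
      }),
    };
  });
}
```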
**ammario** added a commit that referenced this pull request on Oct 7, 2025:
This PR fixes the intermittent OpenAI API error by using the Vercel AI SDK's middleware pattern to intercept and transform messages before transmission.
## Problem
OpenAI's Responses API intermittently returns this error during
streaming:
```
Item 'rs_*' of type 'reasoning' was provided without its required following item
```
The error occurs during **multi-step tool execution** when:
- Model generates reasoning + tool calls
- SDK automatically executes tools and prepares next step
- Tool-call parts contain OpenAI item IDs that reference reasoning items
- When reasoning is stripped but tool-call IDs remain, OpenAI rejects
the malformed input
## Root Cause
OpenAI's Responses API uses internal item IDs (stored in
`providerOptions.openai.itemId`) to link:
- Reasoning items (`rs_*`)
- Function call items (`fc_*`)
When the SDK reconstructs conversation history for multi-step execution:
1. Assistant message includes `[reasoning, tool-call]` parts
2. Tool-call has `providerOptions.openai.itemId: "fc_*"` referencing
`rs_*`
3. Previous middleware stripped reasoning but left tool-call with
dangling reference
4. OpenAI API rejects: "function_call fc_* was provided without its
required reasoning item rs_*"
## Solution
Enhanced **OpenAI reasoning middleware** to strip item IDs when removing
reasoning:
**File: `src/utils/ai/openaiReasoningMiddleware.ts`**
1. Detects assistant messages containing reasoning parts
2. Filters out reasoning parts (OpenAI manages via `previousResponseId`)
3. **NEW:** Strips `providerOptions.openai` from remaining parts to
remove item IDs
4. Prevents dangling references that cause API errors
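A minimal sketch of what such a middleware could look like, assuming the AI SDK v5 middleware shape (`transformParams` receiving the outgoing call params, including the prompt). The type import path and exact part fields may vary by SDK version; this is an illustration of the technique, not the PR's actual file.

```typescript
// Type is exported from "ai" in AI SDK v5 (or "@ai-sdk/provider",
// depending on version) - an assumption worth verifying for your setup.
import type { LanguageModelV2Middleware } from "ai";

export const openaiReasoningMiddleware: LanguageModelV2Middleware = {
  transformParams: async ({ params }) => {
    const prompt = params.prompt.map((message) => {
      if (message.role !== "assistant") return message;
      const content = message.content
        // 1. Drop reasoning parts: OpenAI manages reasoning via previousResponseId.
        .filter((part) => part.type !== "reasoning")
        // 2. Strip OpenAI item IDs so no remaining part references a removed rs_* item.
        .map((part) => {
          if (!part.providerOptions?.openai) return part;
          const { openai: _dropped, ...rest } = part.providerOptions;
          return { ...part, providerOptions: rest };
        });
      return { ...message, content };
    });
    return { ...params, prompt };
  },
};
```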
**Applied in: `src/services/aiService.ts`**
- Wraps OpenAI models with `wrapLanguageModel({ model, middleware })`
- Middleware intercepts messages before API transmission
- Only affects OpenAI (not Anthropic or other providers)
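The wiring in `aiService.ts` might look roughly like this (hypothetical function and model IDs; `wrapLanguageModel` is the SDK call named above):

```typescript
import { wrapLanguageModel } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";

// Only OpenAI models get wrapped; other providers are returned untouched.
function getModel(provider: string, modelId: string) {
  if (provider === "openai") {
    return wrapLanguageModel({
      model: openai(modelId),
      middleware: openaiReasoningMiddleware,
    });
  }
  return anthropic(modelId);
}
```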
## Testing Results
Tested against real chat history that reliably reproduced the error:
✅ **Turn 1: PASSED** - Previously failed 100% of the time, now succeeds
✅ **Turn 2: PASSED** - Multi-step tool execution works correctly
The middleware successfully:
- Stripped 15 OpenAI item IDs from tool-call parts (Turn 1)
- Stripped 15 OpenAI item IDs from tool-call parts (Turn 2)
- Allowed multi-step tool execution without reasoning errors
## Technical Details
**Multi-step execution flow:**
1. User sends message
2. Model generates reasoning + tool calls (Step 1)
3. SDK auto-executes tools
4. SDK prepares Step 2 input: `[system, user,
assistant(reasoning+tools), tool-results]`
5. Middleware strips reasoning + item IDs before sending
6. Step 2 proceeds without API errors
**Why this fixes it:**
- OpenAI Responses API validates item ID references on input
- Removing `providerOptions.openai.itemId` prevents validation errors
- OpenAI tracks context via `previousResponseId`, not message content
- SDK's automatic tool execution works correctly with cleaned messages
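To make the effect concrete, here is an illustrative before/after of the assistant turn's content parts in the Step 2 input (all IDs, tool names, and inputs are made up):

```typescript
// Before: the tool-call carries an OpenAI item ID that implicitly
// references the reasoning item rs_abc123.
const before = [
  { type: "reasoning", text: "…", providerOptions: { openai: { itemId: "rs_abc123" } } },
  { type: "tool-call", toolCallId: "call_1", toolName: "bash", input: { cmd: "ls" },
    providerOptions: { openai: { itemId: "fc_def456" } } },
];

// After: reasoning removed AND item IDs stripped, so nothing dangles.
const after = [
  { type: "tool-call", toolCallId: "call_1", toolName: "bash", input: { cmd: "ls" } },
];
```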
## Files Changed
- `src/services/aiService.ts`: Apply middleware to OpenAI models (7
lines)
- `src/utils/ai/openaiReasoningMiddleware.ts`: New middleware with item
ID stripping (112 lines)
## Related Issues
- Fixes OpenAI reasoning errors from vercel/ai SDK issues #7099, #8031,
#8977
- Supersedes previous approaches (PR #61, #68) that didn't use SDK
middleware
_Generated with `cmux`_
## Problem

After merging PR #59, the OpenAI reasoning error still occurred.

## Root Cause Analysis

The issue was OpenAI-specific. Anthropic reasoning and OpenAI reasoning work differently:

**Anthropic**
- Reasoning content should be sent back to the model (via the `sendReasoning` option, which defaults to true)

**OpenAI**
- Uses encrypted reasoning items (IDs like `rs_*`), managed automatically via `previous_response_id`

## Solution

Strip reasoning parts ONLY for OpenAI, before converting CmuxMessages to ModelMessages:
- New `stripReasoningForOpenAI()` function for OpenAI-specific processing
- Applied in `aiService.ts` (Anthropic keeps `sendReasoning`)

## Changes

- Added `stripReasoningForOpenAI()` in `modelMessageTransform.ts`
- `aiService.ts`: Apply reasoning stripping only for the OpenAI provider
- `filterEmptyAssistantMessages()` no longer strips all reasoning

This ensures reasoning parts never reach OpenAI's API, while Anthropic models continue to receive reasoning via `sendReasoning`.
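A minimal sketch of how that provider branch might look (hypothetical wiring: `stripReasoningForOpenAI` and `CmuxMessage` are the sketches from earlier in this thread, and the `provider` string is an assumed parameter):

```typescript
// Hypothetical: only the OpenAI path strips reasoning; Anthropic keeps its
// reasoning parts so the SDK's sendReasoning behavior is unchanged.
function prepareHistory(provider: string, history: CmuxMessage[]): CmuxMessage[] {
  return provider === "openai" ? stripReasoningForOpenAI(history) : history;
}
```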